grangersearch: An R Package for Exhaustive Granger Causality Testing with Tidyverse Integration
Understanding causal relationships between time series variables is a fundamental problem in economics, finance, neuroscience, and many other fields. While true causality is philosophically complex and difficult to establish from observational data alone, Granger (1969) proposed a practical, testable notion of causality based on predictability: a variable X is said to "Granger-cause" another variable Y if past values of X contain information that helps predict Y beyond what is contained in past values of Y alone.

Granger causality testing has found applications across diverse domains. In macroeconomics, Sims (1972) famously applied the technique to study money-income relationships, while Kraft and Kraft (1978) pioneered its use in energy economics. Financial market researchers including Hiemstra and Jones (1994) have extended the methodology to study price-volume dynamics, and neuroscientists have adapted Granger causality for brain connectivity analysis (Seth, Barrett, and Barnett 2015).

The statistical foundations rest on vector autoregressive (VAR) models (Sims 1980), with comprehensive treatments available in Lütkepohl (2005) and discussions of causal interpretation in Peters, Janzing, and Schölkopf (2017). Despite its popularity, implementing Granger causality tests in R (R Core Team 2024) remains cumbersome for applied researchers.
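The predictability definition above maps directly onto the classical F-test: fit a restricted autoregression of Y on its own lags, an unrestricted one that adds lags of X, and compare residual sums of squares. A minimal numpy sketch (in Python for illustration; `granger_f_test` is a hypothetical helper, not the grangersearch API):

```python
import numpy as np

def granger_f_test(y, x, lags=2):
    """Classical Granger F-test: does adding lags of x to an AR model
    of y significantly reduce the residual sum of squares?
    Returns the F statistic and its degrees of freedom.
    (Illustrative sketch, not a production implementation.)"""
    n = len(y)
    rows = n - lags
    Y = y[lags:]
    # Lagged design matrices: restricted model uses y's own lags only;
    # the unrestricted model adds x's lags.
    y_lags = np.column_stack([y[lags - k: n - k] for k in range(1, lags + 1)])
    x_lags = np.column_stack([x[lags - k: n - k] for k in range(1, lags + 1)])
    ones = np.ones((rows, 1))
    X_r = np.hstack([ones, y_lags])
    X_u = np.hstack([ones, y_lags, x_lags])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return resid @ resid

    rss_r, rss_u = rss(X_r), rss(X_u)
    df1, df2 = lags, rows - X_u.shape[1]
    F = ((rss_r - rss_u) / df1) / (rss_u / df2)
    return F, (df1, df2)

# Synthetic example: x drives y with a one-step delay, so the x -> y
# test should yield a large F while y -> x should not.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.empty(500)
y[0] = 0.0
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

F_xy, _ = granger_f_test(y, x, lags=2)  # testing x -> y
F_yx, _ = granger_f_test(x, y, lags=2)  # testing y -> x
```

In R, the same test is available as `lmtest::grangertest`; the point of the sketch is only to show how little machinery the pairwise test itself requires, compared with the bookkeeping needed to run it exhaustively over many variable pairs and lag orders.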
Rethinking Training Dynamics in Scale-wise Autoregressive Generation
Zhou, Gengze, Ge, Chongjian, Tan, Hao, Liu, Feng, Hong, Yicong
Recent advances in autoregressive (AR) generative models have produced increasingly powerful systems for media synthesis. Among them, next-scale prediction has emerged as a popular paradigm, where models generate images in a coarse-to-fine manner. However, scale-wise AR models suffer from exposure bias, which undermines generation quality. We identify two primary causes of this issue: (1) train-test mismatch, where the model must rely on its own imperfect predictions during inference, and (2) imbalance in scale-wise learning difficulty, where certain scales exhibit disproportionately higher optimization complexity. Through a comprehensive analysis of training dynamics, we propose Self-Autoregressive Refinement (SAR) to address these limitations. SAR introduces a Stagger-Scale Rollout (SSR) mechanism that performs lightweight autoregressive rollouts to expose the model to its own intermediate predictions, thereby aligning train-test patterns, and a complementary Contrastive Student-Forcing Loss (CSFL) that provides adequate supervision for self-generated contexts to ensure stable training. Experimental results show that applying SAR to pretrained AR models consistently improves generation quality with minimal computational overhead. For instance, SAR yields a 5.2% FID reduction on FlexVAR-d16 trained on ImageNet 256 within 10 epochs (5 hours on 32xA100 GPUs). Given its efficiency, scalability, and effectiveness, we expect SAR to serve as a reliable post-training method for visual autoregressive generation.
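The train-test mismatch the abstract describes is the classic exposure-bias problem, and the generic token-level remedy is scheduled sampling: build training contexts that mix ground-truth tokens with the model's own predictions. The sketch below shows only that generic analogue; SAR's Stagger-Scale Rollout applies the idea per scale in a coarse-to-fine pipeline, and `model_predict` is a hypothetical stand-in:

```python
import random

def self_context_inputs(teacher_tokens, model_predict, p_self, rng):
    """Scheduled-sampling-style context construction: with probability
    p_self, replace each teacher token with the model's own prediction
    given the (partially self-generated) prefix. Generic analogue of
    exposing a model to its own outputs during training; not SAR itself."""
    ctx = list(teacher_tokens)
    for t in range(1, len(ctx)):
        if rng.random() < p_self:
            ctx[t] = model_predict(ctx[:t])  # model's own guess for position t
    return ctx

# A dummy "model" that always predicts token 9, just to show the mechanics.
dummy = lambda prefix: 9
rng = random.Random(0)
mixed = self_context_inputs([1, 2, 3, 4], dummy, p_self=1.0, rng=rng)
clean = self_context_inputs([1, 2, 3, 4], dummy, p_self=0.0, rng=rng)
```

At `p_self=1.0` the context is fully self-generated (pure student forcing); at `p_self=0.0` it reduces to ordinary teacher forcing.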
Learning Conditional Independence Differential Graphs From Time-Dependent Data
Estimation of differences in conditional independence graphs (CIGs) of two time series Gaussian graphical models (TSGGMs) is investigated where the two TSGGMs are known to have similar structure. The TSGGM structure is encoded in the inverse power spectral density (IPSD) of the time series. In several existing works, one is interested in estimating the difference in two precision matrices to characterize underlying changes in conditional dependencies of two sets of data consisting of independent and identically distributed (i.i.d.) observations. In this paper we consider estimation of the difference in two IPSDs to characterize the underlying changes in conditional dependencies of two sets of time-dependent data. Our approach accounts for data time dependencies unlike past work. We analyze a penalized D-trace loss function approach in the frequency domain for differential graph learning, using Wirtinger calculus. We consider both convex (group lasso) and non-convex (log-sum and SCAD group penalties) penalty/regularization functions. An alternating direction method of multipliers (ADMM) algorithm is presented to optimize the objective function. We establish sufficient conditions in a high-dimensional setting for consistency (convergence of the inverse power spectral density to true value in the Frobenius norm) and graph recovery. Both synthetic and real data examples are presented in support of the proposed approaches. In synthetic data examples, our log-sum-penalized differential time-series graph estimator significantly outperformed our lasso-based differential time-series graph estimator which, in turn, significantly outperformed an existing lasso-penalized i.i.d. modeling approach, with $F_1$ score as the performance metric.
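The workhorse step in an ADMM scheme for a group-lasso-penalized loss like the one above is the proximal (shrinkage) update. A minimal sketch, treating each complex matrix entry as its own group for simplicity (the paper groups entries across frequencies, and this is a generic shrinkage step, not the authors' full solver):

```python
import numpy as np

def group_soft_threshold(M, lam):
    """Proximal operator of the group-lasso penalty lam * sum ||M_ij||,
    applied entrywise to a (possibly complex) matrix: each entry's
    modulus is shrunk toward zero, and entries with modulus <= lam
    become exactly zero, which is what produces sparse differential
    graph estimates. (Generic ADMM shrinkage step, not the paper's API.)"""
    mag = np.abs(M)
    scale = np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
    return scale * M

# Small entries are zeroed; large entries keep their phase but lose
# lam from their modulus.
M = np.array([[3 + 4j, 0.2 + 0.1j],
              [0.05j,  -2.0 + 0j]])
S = group_soft_threshold(M, lam=0.5)
```

The exact zeros in `S` are what distinguish this estimator from a ridge-type shrinkage: edges absent from the differential graph are removed outright rather than merely damped.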
A Survey on Diffusion Language Models
Li, Tianyi, Chen, Mingda, Guo, Bowei, Shen, Zhiqiang
A different approach, Reparameterized Discrete diffusion Models (RDMs) [62], establishes an alternative formulation for the reverse process, which simplifies the training objective to a weighted cross-entropy loss. This enables more flexible and adaptive decoding strategies, leading to significant performance gains over previous discrete diffusion models. Similarly, MD4 [63] derives a simple weighted integral of cross-entropy losses as the continuous-time variational objective of masked diffusion models, providing a simple and generalized framework for training DLMs. Another analogous approach is MDLM [64], which introduces a simplified, Rao-Blackwellized objective that takes the form of a weighted average of masked language modeling losses. Diffusion-LLM [65] demonstrates the scalability of DLMs by adapting pre-trained masked language models to the diffusion paradigm, followed by task-specific and instruction finetuning, unlocking their versatility in solving general language tasks. Diffusion-NAT [66] unifies a discrete diffusion model with a PLM by reformulating the denoising process as a non-autoregressive masked token recovery task, allowing BART to act as an effective denoiser. Plaid [67] is the first diffusion language model trained to maximize data likelihood, demonstrating through scaling laws that it can outperform autoregressive models like GPT-2 on standard benchmarks. To improve the training objective, SEDD [68] introduces a score entropy loss to directly learn the ratios of the data distribution, which serves as a discrete extension of score matching. Reparameterized Absorbing Discrete Diffusion (RADD) [69] reveals that the concrete score in absorbing diffusion can be expressed as a time-independent conditional probability of the clean data, multiplied by an analytic, time-dependent scalar.
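The objective that RDM, MD4, and MDLM converge on is strikingly simple: an ordinary masked-LM cross-entropy, averaged over masked positions and scaled by a noise-level-dependent weight. A minimal numpy sketch (the exact weighting schedule differs across the three papers; a single scalar weight is used here for brevity):

```python
import numpy as np

def weighted_masked_ce(logits, targets, mask, weight):
    """Weighted masked cross-entropy of the kind used as a masked-
    diffusion training objective: standard MLM loss on masked positions
    only, scaled by a noise-level-dependent weight.
    Shapes: logits (T, V), targets (T,), mask (T,) boolean.
    (Illustrative sketch, not any one paper's exact loss.)"""
    # Log-softmax computed stably by subtracting the per-row max.
    z = logits - logits.max(axis=-1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    nll = -logp[np.arange(len(targets)), targets]
    # Average negative log-likelihood over masked positions, then weight.
    return weight * (nll * mask).sum() / max(mask.sum(), 1)

# Uniform logits over V=4 give log(4) nats per masked position.
logits = np.zeros((4, 4))
targets = np.array([0, 1, 2, 3])
mask = np.array([True, True, False, False])
loss = weighted_masked_ce(logits, targets, mask, weight=2.0)
```

That this familiar loss is a principled variational bound (rather than a heuristic) is precisely the contribution these papers formalize.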
Spanning Tree Autoregressive Visual Generation
Lee, Sangkyu, Lee, Changho, Han, Janghoon, Song, Hosung, You, Tackgeun, Lim, Hwasup, Choi, Stanley Jungkyu, Lee, Honglak, Yu, Youngjae
We present Spanning Tree Autoregressive (STAR) modeling, which can incorporate prior knowledge of images, such as center bias and locality, to maintain sampling performance while also providing sufficiently flexible sequence orders to accommodate image editing at inference. Approaches that expose randomly permuted sequence orders to conventional autoregressive (AR) models in visual generation for bidirectional context either suffer from a decline in performance or compromise the flexibility in sequence order choice at inference. Instead, STAR utilizes traversal orders of uniform spanning trees sampled in a lattice defined by the positions of image patches. Traversal orders are obtained through breadth-first search, allowing us to efficiently construct a spanning tree whose traversal order ensures that the connected partial observation of the image appears as a prefix in the sequence through rejection sampling. Through the tailored yet structured randomized strategy compared to random permutation, STAR preserves the capability of postfix completion while maintaining sampling performance without any significant changes to the model architecture widely adopted in the language AR modeling.
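The key structural property the abstract relies on is that a BFS traversal of a spanning tree always visits a node's parent before the node itself, so any rooted connected region of the lattice can appear as a sequence prefix. A minimal sketch, using Wilson's loop-erased random walks, the standard sampler for uniform spanning trees (STAR's own BFS-plus-rejection construction differs in detail):

```python
import random
from collections import deque

def wilson_ust(w, h, rng):
    """Uniform spanning tree of a w x h lattice via Wilson's algorithm.
    Returns (root, parent) where parent maps each non-root node to its
    tree parent. (Generic sampler, not the STAR paper's procedure.)"""
    nodes = [(x, y) for x in range(w) for y in range(h)]

    def neighbors(n):
        x, y = n
        return [(a, b) for a, b in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                if 0 <= a < w and 0 <= b < h]

    root = nodes[0]
    in_tree = {root}
    parent = {}
    for start in nodes:
        # Random walk from `start` until the tree is hit; overwriting the
        # successor pointer on each revisit erases loops automatically.
        succ, node = {}, start
        while node not in in_tree:
            nxt = rng.choice(neighbors(node))
            succ[node] = nxt
            node = nxt
        # Attach the loop-erased path to the tree.
        node = start
        while node not in in_tree:
            in_tree.add(node)
            parent[node] = succ[node]
            node = succ[node]
    return root, parent

def bfs_order(root, parent):
    """Breadth-first traversal order: every node's parent precedes it,
    so a connected rooted region always forms a prefix of the order."""
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)
    order, queue = [], deque([root])
    while queue:
        n = queue.popleft()
        order.append(n)
        queue.extend(children.get(n, []))
    return order

rng = random.Random(0)
root, parent = wilson_ust(6, 6, rng)
order = bfs_order(root, parent)
```

Each sampled tree yields a different generation order, which is how STAR obtains randomized yet structured orders instead of arbitrary permutations.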